Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection
Over the past decade, deep neural networks (DNNs) have demonstrated
remarkable performance in a variety of applications. As we try to solve more
advanced problems, increasing demands for computing and power resources have
become inevitable. Spiking neural networks (SNNs) have attracted widespread
interest as the third generation of neural networks due to their event-driven
and low-power nature. SNNs, however, are difficult to train, mainly owing to
the complex dynamics of their neurons and non-differentiable spike operations.
Furthermore, their applications have been limited to relatively simple tasks
such as image classification. In this study, we investigate the performance
degradation of SNNs in a more challenging regression problem (i.e., object
detection). Through our in-depth analysis, we introduce two novel methods:
channel-wise normalization and signed neuron with imbalanced threshold, both of
which provide fast and accurate information transmission for deep SNNs.
Consequently, we present the first spike-based object detection model, called
Spiking-YOLO. Our experiments show that Spiking-YOLO achieves remarkable
results that are comparable (up to 98%) to those of Tiny YOLO on non-trivial
datasets, PASCAL VOC and MS COCO. Furthermore, Spiking-YOLO on a neuromorphic
chip consumes approximately 280 times less energy than Tiny YOLO and converges
2.3 to 4 times faster than previous SNN conversion methods.
Comment: Accepted to AAAI 202
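The channel-wise normalization described above can be sketched as follows. It rescales conversion weights per output channel by that channel's own maximum activation, rather than dividing the whole layer by one layer-wide maximum, so channels with small activations still transmit enough spikes. The function name, array shapes, and calibration inputs are illustrative assumptions, not the paper's API:

```python
import numpy as np

def channel_wise_normalize(weights, max_acts_prev, max_acts_curr):
    """Hedged sketch of channel-wise weight normalization for
    DNN-to-SNN conversion.

    weights       : (out_ch, in_ch) weight matrix of the layer
    max_acts_prev : per input-channel activation maxima from a
                    calibration run of the source DNN (assumed input)
    max_acts_curr : per output-channel activation maxima (assumed input)
    """
    w = np.asarray(weights, dtype=float)
    lam_prev = np.asarray(max_acts_prev, dtype=float)
    lam_curr = np.asarray(max_acts_curr, dtype=float)
    # w'_ij = w_ij * lam_prev_j / lam_curr_i : each output channel is
    # normalized by its own maximum instead of the layer-wide maximum,
    # so small-activation channels are not starved of spikes.
    return w * lam_prev[np.newaxis, :] / lam_curr[:, np.newaxis]
```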
T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding
Spiking neural networks (SNNs) have gained considerable interest due to their energy-efficient characteristics, yet the lack of a scalable training algorithm has restricted their applicability to practical machine learning problems. The deep neural network-to-SNN conversion approach has been widely studied to broaden the applicability of SNNs. Most previous studies, however, have not fully utilized the spatio-temporal aspects of SNNs, which has led to inefficiency in terms of number of spikes and inference latency. In this paper, we present T2FSNN, which introduces the concept of time-to-first-spike coding into deep SNNs using a kernel-based dynamic threshold and dendrite to overcome the aforementioned drawback. In addition, we propose gradient-based optimization and early firing methods to further increase the efficiency of the T2FSNN. According to our results, the proposed methods can reduce inference latency and number of spikes to 22% and to less than 1%, respectively, of those of burst coding, which is the state-of-the-art result on CIFAR-100.
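The core idea of time-to-first-spike coding can be sketched in a few lines: a value is carried by when a neuron first fires, with larger values firing earlier, so each neuron emits at most one spike. This is a minimal illustration of the coding principle only, not the paper's kernel-based dynamic-threshold scheme:

```python
import numpy as np

def ttfs_encode(x, t_max=100):
    """Minimal time-to-first-spike encoding sketch (assumed scheme):
    an input in [0, 1] maps to a single spike time in [0, t_max],
    where x = 1 fires immediately (t = 0) and x = 0 fires last
    (represented here as t_max)."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return np.round((1.0 - x) * t_max).astype(int)

def ttfs_decode(t, t_max=100):
    """Invert the encoding: an earlier spike means a larger value."""
    return 1.0 - np.asarray(t, dtype=float) / t_max
```

Because each neuron fires at most once per inference, spike counts drop sharply compared with rate coding, which fires many spikes per value.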
AutoSNN: Towards Energy-Efficient Spiking Neural Networks
Spiking neural networks (SNNs) that mimic information transmission in the
brain can energy-efficiently process spatio-temporal information through
discrete and sparse spikes, thereby receiving considerable attention. To
improve accuracy and energy efficiency of SNNs, most previous studies have
focused solely on training methods, and the effect of architecture has rarely
been studied. We investigate the design choices used in previous studies in
terms of accuracy and number of spikes, and find that they are not well
suited for SNNs. To further improve accuracy and reduce the spikes
generated by SNNs, we propose a spike-aware neural architecture search
framework called AutoSNN. We define a search space consisting of architectures
without undesirable design choices. To enable the spike-aware architecture
search, we introduce a fitness function that considers both accuracy and number of
spikes. AutoSNN successfully searches for SNN architectures that outperform
hand-crafted SNNs in accuracy and energy efficiency. We thoroughly demonstrate
the effectiveness of AutoSNN on various datasets including neuromorphic
datasets.
Comment: Accepted in ICML2
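A spike-aware fitness of the kind described above can be sketched as follows. Candidates are rewarded for accuracy and penalized for exceeding a spike budget; the multiplicative form, the budget parameter, and the trade-off exponent `lam` are plausible assumptions for illustration, not necessarily AutoSNN's exact formulation:

```python
def spike_aware_fitness(accuracy, num_spikes, target_spikes, lam=0.5):
    """Hedged sketch of a spike-aware NAS fitness (assumed form):
    accuracy scaled by a penalty that shrinks the score when a
    candidate emits more spikes than the target budget and boosts
    it when it emits fewer. `lam` controls the trade-off strength."""
    return accuracy * (target_spikes / num_spikes) ** lam
```

With such a fitness, two architectures of equal accuracy are ranked by spike count, steering the search toward energy-efficient SNNs.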